Frontiers in Neurology
Frontiers Media SA
Preprints posted in the last 7 days, ranked by how well they match Frontiers in Neurology's content profile, based on 91 papers previously published here. The average preprint has a 0.25% match score for this journal, so anything above that is already an above-average fit.
Yang, D.; Li, G.; Song, J.; Shi, X.; Xu, X.; Ma, J.; Guo, C.; Liu, C.; Yang, J.; Li, F.; Zhu, Y.; Zi, W.; Ding, Q.; Chen, Y.
Background: Acute ischemic stroke (AIS) remains a significant cause of disability worldwide. Current treatments, primarily intravenous thrombolysis (IVT), are limited by narrow time windows and reperfusion injury, leading to suboptimal outcomes for many patients. Chuanzhi Tongluo (CZTL), a traditional Chinese medicine, has been preliminarily recognized as a novel cerebral protection agent in animal models. Objectives: This trial investigates the efficacy and safety of the CZTL capsule in patients with AIS who are not eligible for IVT or who experience early neurological deterioration after IVT. Methods and design: The CONCERN trial is an investigator-initiated, prospective, multicenter, double-blind, parallel-control, randomized clinical study in China. An estimated 1,208 eligible participants will be consecutively randomized to receive CZTL capsule therapy or placebo in a 1:1 ratio across approximately 70 stroke centers in China. All enrolled patients are orally administered 2 capsules of CZTL or placebo 3 times a day together with antiplatelet agents for 3 months. Outcomes: The primary endpoint is an excellent functional outcome, defined as a score of 0 or 1 on the modified Rankin Scale (mRS) at 90 days. Lead safety endpoints include 90-day mortality and symptomatic intracranial hemorrhage within 48 hours. Conclusions: Results of the CONCERN trial will determine the clinical efficacy and safety of the traditional Chinese medicine CZTL capsule in the treatment of patients with AIS. Trial registry number: ChiCTR2300074147 (www.chictr.org.cn).
Khorsand, B.; Teichrow, D.; Lipton, R. B.; Ezzati, A.
Objective: To describe the design, feasibility, and baseline characteristics of the Migraine Impact on Neurocognitive Dynamics (MIND) study, a 30-day smartphone-based cohort for high-frequency assessment of cognition and symptoms in adults with migraine. Background: Cognitive symptoms are an important component of migraine burden, but they are difficult to measure using single-visit testing or retrospective questionnaires. Repeated smartphone-based assessment may better capture real-world variability in cognition and symptoms. Methods: Adults meeting International Classification of Headache Disorders, 3rd edition, criteria for migraine were enrolled remotely and completed 30 days of once-daily ecological momentary assessments and mobile cognitive tasks delivered through the Mobile Monitoring of Cognitive Change platform. Baseline measures assessed demographics, migraine characteristics, disability, mood, stress, and treatment patterns. Feasibility was evaluated using enrollment, completion, and retention metrics. Results: A total of 177 participants enrolled (mean age 38.8 ± 11.9 years; 79.7% female), including 80/177 (45.2%) with chronic migraine. Across the 30-day protocol, 3688 daily assessments were completed, representing 70.8% of all possible study days, and 70.6% of participants completed at least 20 days of monitoring. Completion remained above 60% across study days. At baseline, chronic migraine was associated with greater burden than low-frequency and high-frequency episodic migraine, including higher MIDAS scores (98.6 vs. 38.7 and 70.3), more days with concentration difficulty (16.0 vs. 7.9 and 11.5), and more days with functional interference (18.5 vs. 7.6 and 13.0). Conclusions: The MIND study demonstrates the feasibility of high-frequency smartphone-based assessment of cognition and symptoms in migraine and provides a methodological foundation for future analyses of within-person cognitive and symptom dynamics across the migraine cycle.
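The feasibility metrics reported above (percentage of all possible study days completed, proportion of participants retained for at least 20 days) reduce to simple arithmetic over per-participant completion counts. A minimal sketch, using a hypothetical toy cohort rather than the MIND data:

```python
def feasibility_metrics(days_completed, protocol_days=30, retention_threshold=20):
    """Summarize adherence to a fixed-length daily-assessment protocol.

    days_completed: per-participant counts of completed daily assessments.
    Returns (fraction of all possible study days completed,
             fraction of participants completing >= retention_threshold days).
    """
    n = len(days_completed)
    completion = sum(days_completed) / (n * protocol_days)
    retained = sum(1 for d in days_completed if d >= retention_threshold) / n
    return completion, retained

# Hypothetical four-person cohort (illustration only, not study data)
completion, retained = feasibility_metrics([30, 25, 10, 28])
```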
Kmiecik, M. J.; O'Brien, L.; Szpyhulsky, M.; Iodice, V.; Freeman, R.; Jordan, J.; Biaggioni, I.; Kaufmann, H.; Vickery, R.; Miller, A.; Saunders, E.; Rushton, E.; Valle, L.; Norcliffe-Kaufmann, L.
Background: Although neurogenic orthostatic hypotension (nOH) is a common and debilitating feature of multiple system atrophy (MSA), little is known about the burden of symptoms in the real world. Objectives: To design and conduct a cross-sectional community-based research survey targeting patients with MSA, with and without nOH. Methods: We recruited patients with MSA to complete an anonymous online survey covering three core themes: 1) timely diagnosis, 2) nOH pharmacotherapy and refractory symptoms, and 3) confidence in physician knowledge. Responses were grouped by pre-specified diagnostic certainty levels. Relationships between symptoms, function, and pharmacotherapy were assessed using univariate and multivariate methods. Results: We analyzed 259 respondents with a self-reported diagnosis of MSA (age: M=64.38, SD=8.09 years; 44% female). In total, 42% also had a diagnosis of nOH; 40% had symptoms highly suspicious of nOH but no diagnosis; and 21% reported having never had their blood pressure measured in the standing position at a clinical visit. Treatment with a pressor agent was independently associated with the presence of other symptoms of autonomic failure. Each additional nOH symptom reported increased the odds of requiring pharmacotherapy by 18%. Yet, despite anti-hypotensive medication use, 97% of patients reported limitations in their ability to bathe, cook, or arise from a chair or bed, with 76% needing caregiver support for refractory nOH symptoms. Conclusions: This cross-sectional representative sample shows that nOH is underrecognized and undertreated in patients with MSA, leading to substantial functional limitations. It is our hope that these findings are leveraged for planning future trials and advocating for better treatments.
Jansen, C.; Stalter, J.; Reuter, S.; Witt, K.
Background: Accelerated long-term forgetting (ALF), defined as an increased rate of memory loss over extended intervals, has so far been detected in a pilot study of patients with mild multiple sclerosis (MS). This study aimed to (I) confirm the presence of ALF in a larger, heterogeneous MS sample, (II) explore associations with patient-reported outcomes, and (III) assess the diagnostic performance of ALF tests for subjective memory impairment. Methods: This study compared 62 MS patients and 65 age-, sex-, and education-matched healthy controls using standardized memory tests (RAVLT, WMS-IV Logical Memory subtest). Recall was assessed immediately, after 30 minutes, and after 7 days. Seven-day/30-minute recall ratios (QRAVLT, QWMS) served as primary outcomes. Self-report measures included memory complaints, fatigue, depression, and sleep disturbances. Linear regression and receiver operating characteristic (ROC) analyses assessed predictors and diagnostic accuracy. Results: ALF was observed in MS: QRAVLT was lower in patients than in controls (0.64 [95% CI 0.59-0.69] vs. 0.78 [0.73-0.82], p < 0.001), as was QWMS (0.79 [95% CI 0.74-0.84] vs. 0.95 [0.90-1.00], p < 0.001), despite comparable initial learning. Greater fatigue, higher memory complaints, longer disease duration, older age, and greater disability were associated with lower ALF scores. The combined ALF score moderately discriminated subjective memory impairment (AUC 0.74; sensitivity 0.73; specificity 0.73). Conclusion: MS patients showed ALF despite normal initial learning, indicating a specific memory deficit undetected by standard tests. Long-delay recall using the RAVLT and the WMS-IV Logical Memory subtest may improve detection of cognitive impairment in MS.
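The ratio outcome and the ROC discrimination step can be sketched directly. Below is a minimal illustration (toy numbers, not study data) of the 7-day/30-minute recall ratio and an AUC computed as the Mann-Whitney probability that a positive-class score exceeds a negative-class score:

```python
def recall_ratio(recall_7day, recall_30min):
    """Seven-day / 30-minute recall ratio (the Q scores); lower values
    indicate faster forgetting over the long delay."""
    return [late / early for late, early in zip(recall_7day, recall_30min)]

def auc(scores_neg, scores_pos):
    """AUC as P(score_pos > score_neg), counting ties as 0.5
    (equivalent to the normalized Mann-Whitney U statistic)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

With scores oriented so that higher values indicate the positive class, `auc` returns 0.5 for chance-level discrimination and 1.0 for perfect separation.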
Graure, M.; Nierobisch, N.; De Vere-Tyndall, A. J.; Pakeerathan, T.; Ayzenberg, I.; Gernert, J.; Havla, J.; Ringelstein, M.; Aktas, O.; Tkachenko, D.; Huemmert, M.; Trebst, C.; Cedra Fuertes, N. A.; Papadopoulou, A.; Giglhuber, K.; Wicklein, R.; Berthele, A.; Weller, M.; Kana, V.; Roth, P.; Herwerth, M.
Background: Chronic relapsing inflammatory optic neuropathy (CRION) is a steroid-dependent form of optic neuritis with incompletely understood pathophysiology. The identification of myelin oligodendrocyte glycoprotein antibodies (MOG-IgG) in a substantial patient subset has challenged the diagnostic and therapeutic management. The aim of this study was to investigate clinical profiles and treatment outcomes of patients with CRION, comparing MOG-IgG-positive (MOG+) and seronegative (MOG−) subgroups. Methods: Patients from six European tertiary centers fulfilling diagnostic criteria for CRION were included. All underwent cell-based autoantibody testing. Clinical outcomes (visual acuity, annualized relapse rate [ARR]), laboratory and imaging findings (MRI, OCT), and treatment responses were retrospectively analyzed. Results: Sixty patients were included (median age 33 years; 70% female); 27 (45%) were MOG+. MOG+ CRION was associated with later onset, higher ARR before treatment (median [IQR] 2 [1-3] vs. 1 [1-2], p = 0.023), and a trend toward shorter inter-relapse intervals. Additional distinguishing features included higher frequencies of antinuclear antibody positivity, elevated CSF interleukin-6, and extensive optic neuritis on MRI. Relapse burden correlated with visual acuity decline and retinal thinning. In MOG+ patients, monoclonal antibody therapy reduced the ARR (n = 21; 2 [1-3] vs. 0 [0-2], p = 0.024), primarily driven by tocilizumab (n = 11; 2 [1-3] vs. 0 [0-1], p = 0.023). In MOG− patients, rituximab and azathioprine showed a trend toward ARR reduction. Conclusion: CRION represents a heterogeneous syndrome encompassing distinct subgroups. MOG+ patients demonstrate higher disease activity but respond favorably to tocilizumab. Serological testing is critical for treatment stratification and relapse prevention.
Bovis, F.; Montobbio, N.; Signori, A.; Kalincik, T.; Arnold, D. L.; Tintore, M.; Kappos, L.; Sormani, M. P.
Disability worsening is the critical long-term outcome in multiple sclerosis, yet the Expanded Disability Status Scale incompletely captures neurological deterioration and has limited sensitivity in the short time windows of clinical trials. Composite endpoints incorporating functional measures have been proposed to address these limitations, but whether they reliably improve detection of treatment effects has not been established across trials. We conducted a post-hoc analysis of individual patient data from ten phase III randomised controlled trials (ASCEND, BRAVO, CONFIRM, DEFINE, EXPAND, INFORMS, OLYMPUS, OPERA I/II, and ORATORIO; n = 9,369), spanning relapsing-remitting and progressive multiple sclerosis. Confirmed disability worsening was defined using harmonised criteria with the msprog package and confirmed at 24 weeks. Treatment effects were estimated using Cox proportional hazards models and combined across trials in a one-stage individual patient data framework. Composite endpoints were constructed from the Expanded Disability Status Scale, the timed 25-foot walk test, and the nine-hole peg test using logical unions (OR-type), intersections (AND-type), and majority-vote structures. Sensitivity to treatment effect was quantified using Z-scores (the ratio of the pooled log-hazard ratio to its standard error) and compared to the Expanded Disability Status Scale reference using interaction tests. Event rates varied across components: the timed walk test generated the highest rates (up to 46.8%) while the nine-hole peg test generated the lowest (as low as 2.1%). OR-type composite endpoints showed weaker treatment effects than the Expanded Disability Status Scale alone, with the largest reductions in sensitivity observed for endpoints incorporating the timed walk test (ΔZ up to +2.26; interaction p = 0.004).
These findings were confirmed across disease subtypes and were pronounced in relapsing-remitting trials, where no composite endpoint outperformed the Expanded Disability Status Scale. In progressive multiple sclerosis, the combination of the Expanded Disability Status Scale and the nine-hole peg test showed numerically stronger treatment effects (ΔZ = -1.65), though interaction tests did not reach statistical significance (p = 0.051). Composite endpoints do not systematically improve treatment effect detection in multiple sclerosis trials. Increased event capture driven by the timed walk test introduces noise that dilutes the treatment signal rather than amplifying it, highlighting that event rate and endpoint quality are not interchangeable. Upper limb function assessed by the nine-hole peg test provides complementary and specific information, particularly in progressive disease. The combination of global disability and upper limb measures represents a promising direction for future endpoint development in progressive multiple sclerosis trials, warranting validation.
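The OR-type, AND-type, and majority-vote structures described above can be made concrete with per-patient boolean worsening indicators for the three components. A minimal illustrative sketch:

```python
def composite_event(edss_worsened, t25fw_worsened, nhpt_worsened, rule="or"):
    """Combine confirmed-worsening indicators for the EDSS, the timed
    25-foot walk, and the nine-hole peg test into a composite endpoint.

    'or'       -> event if any component worsened (highest event rate)
    'and'      -> event only if all components worsened (lowest event rate)
    'majority' -> event if at least two of the three components worsened
    """
    events = (edss_worsened, t25fw_worsened, nhpt_worsened)
    if rule == "or":
        return any(events)
    if rule == "and":
        return all(events)
    if rule == "majority":
        return sum(events) >= 2
    raise ValueError(f"unknown rule: {rule}")
```

The event-rate ordering noted in the analysis (OR-type capturing the most events, AND-type the fewest) follows directly from this construction.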
Kurtz, J.; Billot, A.; Falconer, I.; Small, H.; Charidimou, A.; Kiran, S.; Varkanitsa, M.
Background: Theory of Mind (ToM) deficits are well-documented in right-hemisphere stroke but remain understudied in post-stroke aphasia. Prior work suggests that performance on tasks assessing ToM may be relatively preserved in aphasia and dissociable from language impairment, but these findings are based largely on small studies. This study examined performance on nonverbal false-belief tasks in post-stroke aphasia, its relationship with aphasia severity, and whether vascular brain health, operationalized using cerebral small vessel disease (CSVD) markers, contributed to variability in performance. Methods: Forty-four individuals with aphasia completed two nonverbal belief-reasoning tasks assessing spontaneous perspective-taking and self-perspective inhibition. Task accuracy served as the primary outcome. Linear regression models examined associations between task performance, aphasia severity (Western Aphasia Battery-Revised Aphasia Quotient), and CSVD markers, including white matter hyperintensities, cerebral microbleeds, lacunes, and enlarged perivascular spaces in the basal ganglia and centrum semiovale. Results: Performance was heterogeneous across tasks, with reduced performance observed in 23% of participants on the Reality-Unknown task and 36% on the Reality-Known task. Aphasia severity was not associated with task accuracy. Greater cerebral microbleed count was associated with lower accuracy on both tasks, while greater basal ganglia enlarged perivascular spaces burden showed a more selective association with lower performance. Conclusions: Performance on nonverbal false-belief tasks in aphasia is variable and not explained by aphasia severity alone. These findings suggest that apparent ToM-related difficulties in aphasia may be shaped by broader vascular brain health, supporting a more multidimensional framework for interpreting social-cognitive task performance after stroke.
Lee, K.-J.; Lee, J.-Y.; Lee, S. J.; Bae, H.-J.; Sung, J.
Background: Type 2 diabetes mellitus (T2DM) has long been considered a risk factor for cerebral small vessel disease (cSVD), yet the exact relationship between glycemic markers and cSVD remains unclear. This study explores the genetic overlap and causal associations between T2DM, glycemic indices, and cSVD phenotypes using genome-wide association studies (GWAS). Methods: Using large consortium-based GWAS data, we examined relationships between T2DM, glycemic indicators (glycated hemoglobin, fasting glucose, 2-hour glucose after oral challenge, and fasting insulin), and cSVD phenotypes (white matter hyperintensity volume, lacunar stroke, cerebral microbleeds, and enlarged perivascular spaces). Our multi-level genomic strategy included: 1) identifying pleiotropic single nucleotide polymorphisms (SNPs) through PLEIO and eQTL analysis, 2) assessing genome-wide genetic correlations using LDSC and GNOVA, and 3) determining causal relationships with two-sample and multivariable Mendelian randomization analyses. Results: We identified 14 pleiotropic SNPs with significant shared associations among T2DM, glycemic indicators, and cSVD phenotypes. Notably, MICB gene expression was elevated in brain, vascular, and pancreatic tissues, while three HLA genes (HLA-DQA1, HLA-DRB1 and HLA-DRB5) showed reduced expression. Genetic correlation analysis revealed positive correlations between T2DM, fasting glucose, and postprandial glucose with multiple cSVD phenotypes including WMH, lacunar stroke, and perivascular spaces. Mendelian randomization demonstrated that T2DM, 2-hour glucose, and HbA1c level causally increased lacunar stroke risk (OR 1.16 [1.09-1.23], OR 1.46 [1.20-1.77], OR 1.52 [1.04-2.23], respectively). Multivariable Mendelian randomization analysis confirmed that T2DM and postprandial glucose maintained a robust direct effect on lacunar stroke independent of other cSVD phenotypes, while HbA1c did not retain significance after conditioning on cSVD imaging markers. 
Conclusions: Our multi-level genomic analysis reveals links between T2DM, glycemic traits, and cSVD through specific genetic variants, genome-wide correlations, and causal relationships. The involvement of immune-related genes suggests potential biological mechanisms. The causal effect of postprandial glucose on lacunar stroke suggests that impaired glucose tolerance may be a relevant therapeutic target for lacunar stroke prevention.
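The two-sample Mendelian randomization step rests on the inverse-variance-weighted (IVW) estimator: a weighted regression of SNP-outcome effects on SNP-exposure effects through the origin. A minimal sketch with toy summary statistics (not the study's GWAS data):

```python
import math

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect IVW causal estimate and its standard error from
    per-SNP summary statistics (one entry per genetic instrument)."""
    weights = [1.0 / se ** 2 for se in se_outcome]
    num = sum(w * bx * by for w, bx, by in zip(weights, beta_exposure, beta_outcome))
    den = sum(w * bx ** 2 for w, bx in zip(weights, beta_exposure))
    beta = num / den
    se = math.sqrt(1.0 / den)
    return beta, se

# Toy example: two instruments with identical precision
beta, se = ivw_estimate([1.0, 1.0], [0.2, 0.4], [0.1, 0.1])
```

For a binary outcome such as lacunar stroke, the odds ratio per unit of exposure is then exp(beta); the study's multivariable extension models exposures jointly to isolate direct effects.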
Khan, M. H.; Chakraborty, S.; Marin-Pardo, O.; Barisano, G.; Borich, M. R.; Cole, J. H.; Cramer, S. C.; Fokas, E. E.; Fullmer, N. H.; Hayes, L.; Kim, H.; Kumar, A.; Rosario, E. R.; Schambra, H. M.; Schweighofer, N.; Taga, M.; Winstein, C.; Liew, S.-L.
Post-stroke cognitive recovery is difficult to predict using focal lesion characteristics alone. The brain's capacity to maintain cognitive function depends also on structural integrity of the whole brain. One way to measure brain health is through the severity of cerebral small vessel disease (CSVD) markers, which reflect aging-related pathologies that erode structural integrity. Here, we propose a composite measure of CSVD (cCSVD) integrating three independently validated biomarkers automatically quantified using T1-weighted MRIs: white matter hyperintensity volume (WMH; representing vascular injury), perivascular space count (PVS; putative glymphatic clearance), and brain-predicted age difference (brain-PAD; structural atrophy). We hypothesize that cCSVD, which captures the shared variance across these CSVD biomarkers, will be a robust indicator of whole-brain structural integrity and predict cognitive changes 3 months after stroke. We analyzed 65 early subacute stroke survivors with assessments within 21 days (baseline) and at 90 days (follow-up) post-stroke. WMH volume, PVS count, and brain-PAD were quantified from baseline T1-weighted MRIs, and then residualized for age, sex, days since stroke, and intracranial volume. Principal component analysis (PCA) of the residualized biomarkers was used to derive cCSVD. Beta regression with stability selection using LASSO was used to model three outcomes: baseline Montreal Cognitive Assessment (MoCA) scores, follow-up MoCA scores, and longitudinal change (follow-up score adjusted for baseline score). Logistic regression was used to test if baseline cCSVD predicted improvement in those with baseline cognitive impairment (MoCA < 26). The PCA revealed that the first principal component (PC1) explained 43.1% of the total variance among WMH volume, PVS count, and brain-PAD. The three biomarkers contributed nearly equally to PC1, which was subsequently used as the baseline cCSVD score. 
Lower baseline cCSVD was significantly associated with better MoCA scores at follow-up (β = -0.19, p = 0.009), even after adjusting for baseline MoCA (β = -0.12, p = 0.042), and, importantly, outperformed all individual biomarkers. Furthermore, lower cCSVD at baseline significantly increased the likelihood of improving to cognitively unimpaired status at three months (OR = 0.34, p = 0.036), independent of age and education. The composite cCSVD score captures the additive impact of vascular injury, glymphatic dysfunction, and structural atrophy on recovery in a way that individual measures do not. cCSVD accounts for shared variance across these domains, reflecting a patient's latent capacity for cognitive recovery, where relative integrity in one CSVD domain may mitigate effects of another. This automated, T1-based framework offers a scalable tool for predicting post-stroke recovery.
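The composite construction (residualize, standardize, take the first principal component) can be sketched with plain numpy. This is an illustrative sketch of the PCA step only, not the study's full pipeline:

```python
import numpy as np

def first_pc(X):
    """First principal component of column-standardized data.

    X: n_subjects x n_biomarkers array (e.g. residualized WMH volume,
    PVS count, brain-PAD). Returns (per-subject PC1 scores, loadings,
    fraction of total variance explained by PC1).
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    loadings = Vt[0]
    scores = Z @ loadings
    var_explained = S[0] ** 2 / np.sum(S ** 2)
    return scores, loadings, var_explained
```

In the study, PC1 explained 43.1% of the variance with near-equal loadings across the three biomarkers; with perfectly correlated toy columns it would explain essentially all of it.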
Bombaci, A.; Iadarola, A.; Giraudo, A.; Fattori, E.; Sinagra, S.; Magnino, A.; Calvo, A.; Chio', A.; Cicolin, A.
Background: Sleep-wake and circadian disturbances are increasingly recognised in people living with amyotrophic lateral sclerosis (plwALS), but endogenous circadian phase timing and its prognostic significance in early disease remain unclear. We assessed whether salivary dim-light melatonin onset (DLMO), an objective marker of central circadian phase, is altered in early plwALS and whether it provides prognostic information. Methods: In this prospective longitudinal observational study, plwALS within 18 months of symptom onset underwent home-based salivary melatonin sampling under dim-light conditions at six predefined time points around habitual sleep onset (HSO). Melatonin profiles were modeled using cubic smoothing splines, and DLMO was defined as the first time the fitted curve reached 3 pg/mL. Clinical, respiratory, and sleep assessments were collected at baseline (T0) and after 6 months (T6); a subgroup repeated saliva sampling at T6. Age- and sex-matched controls underwent melatonin profiling. Associations with disease progression, incident respiratory symptoms, and survival/tracheostomy were examined using regression and survival analyses. Results: Fifty plwALS were enrolled. Compared with controls, plwALS showed an earlier DLMO (20:24 vs 20:58; p=0.028) despite similar HSO and chronotype. Within the ALS cohort, a later baseline DLMO correlated with worse functional/motor status, faster disease progression, incident dyspnea/orthopnea by T6 (adjusted OR 3.02; p=0.017), and poorer survival/tracheostomy-free outcome. In the re-sampled subgroup (n=28), DLMO and other melatonin-derived metrics did not change over 6 months. Conclusions: Circadian phase alterations are detectable in early ALS. Baseline DLMO may represent a non-invasive prognostic biomarker for progression, respiratory symptom emergence, and survival, warranting validation in larger multicentre cohorts.
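The DLMO definition (first time the fitted melatonin curve reaches 3 pg/mL) reduces to a threshold-crossing search on the fitted profile. The study fits cubic smoothing splines; the sketch below uses linear interpolation between samples purely to stay dependency-free:

```python
def dlmo(times_h, melatonin_pg_ml, threshold=3.0):
    """Estimate dim-light melatonin onset as the first time the melatonin
    profile rises through `threshold` (3 pg/mL in the study).

    times_h: sampling times in decimal hours; melatonin_pg_ml: matching
    concentrations. Returns the crossing time, or None if never reached.
    """
    for i in range(1, len(melatonin_pg_ml)):
        lo, hi = melatonin_pg_ml[i - 1], melatonin_pg_ml[i]
        if lo < threshold <= hi:
            frac = (threshold - lo) / (hi - lo)  # position within the interval
            return times_h[i - 1] + frac * (times_h[i] - times_h[i - 1])
    return None

# Toy profile sampled hourly around habitual sleep onset
onset = dlmo([19.0, 20.0, 21.0, 22.0], [1.0, 2.0, 4.0, 8.0])  # -> 20.5
```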
Ollila, H. M.; Eghtedarian, R.; Haapaniemi, H.; Ramste, M.; FinnGen
Background: Narcolepsy is a debilitating sleep disorder caused by hypocretin deficiency. Aside from its role in promoting wakefulness, hypocretin is linked to the modulation of appetite and metabolism, and its loss often results in weight gain. Study objectives: We aimed to unravel the comprehensive epidemiological connection between narcolepsy and major cardiometabolic outcomes. Methods: We analyzed cardiovascular and metabolic disease distribution in the FinnGen study. Using longitudinal electronic health records, we assessed associations between narcolepsy, cardiac/metabolic markers, and prescriptions for relevant drugs. Results: Our findings demonstrate significant associations between narcolepsy and metabolic traits (OR [95% CI] = 2.65 [1.81, 3.89]) as well as stroke (OR = 2.36 [1.38, 4.04]). Narcolepsy patients exhibit a less favourable metabolic profile, including higher glucose levels (OR = 1.1143 [1.0599, 1.1715]) and dyslipidaemia. This is supported by increased prescriptions of insulin (OR = 2.269 [1.46, 3.53]), simvastatin (OR = 2.292 [1.59, 3.31]), and metformin (OR = 2.327 [1.66, 3.25]), reflecting marked metabolic disturbance. Furthermore, positive associations with antihypertensive and antiplatelet medications were observed, consistent with elevated cardiovascular risk. Conclusion: Taken together, our findings highlight the cardiometabolic burden of narcolepsy. This study enhances understanding of the metabolic and cardiovascular consequences of narcolepsy and offers timely guidance for effective disease control.
Chen, Y.; Law, Z. K.; Zhou, X.; Dai, Q.; Xiang, S.; Xiao, X.; Ma, J.; Feng, M.; Peng, W.; Zhou, S.; Chen, L.; Zhou, Y.; Lai, Y.; Yeo, L.; An, S.; He, Y.; Pan, S.-Y.
Objective: To compare the safety and efficacy of bridging intravenous thrombolysis (IVT) plus endovascular thrombectomy (EVT) versus direct EVT in patients with acute ischemic stroke (AIS) due to anterior circulation large vessel occlusion (LVO) treated within the 6- to 24-hour time window. Methods: This is a retrospective analysis of a prospective EVT registry from 10 comprehensive stroke centers in China and Singapore between 2019 and 2024. Eligible patients had anterior circulation LVO, underwent EVT within 6-24 hours of onset, and had ASPECTS ≥6, NIHSS ≥6, and pre-stroke mRS ≤2. Patients were stratified into bridging IVT + EVT (IVT group) versus direct EVT alone (non-IVT group). Propensity score matching (1:2 ratio) was performed to balance baseline covariates. The primary outcome was 3-month favorable functional outcome (mRS 0-2). Secondary outcomes included successful recanalization (mTICI 2b-3), symptomatic intracranial hemorrhage (sICH), hemorrhagic transformation (HT), and 3-month mortality. In the matched cohort, binary outcomes were compared using the Cochran-Mantel-Haenszel test. Results: Of 772 included patients, 110 (14.2%) received bridging IVT and 662 (85.8%) received direct EVT. After propensity score matching, 202 non-IVT patients were matched to 101 IVT patients, with all covariates well balanced (absolute SMD <0.10). In the matched cohort, bridging IVT was not associated with a significant difference in 3-month favorable outcome (44.55% vs. 47.03%; common OR 0.91; 95% CI 0.56-1.46), successful recanalization (91.09% vs. 90.10%; OR 1.11; 0.51-2.44), sICH (5.94% vs. 9.41%; OR 0.61; 0.24-1.58), HT (23.76% vs. 23.27%; OR 1.03; 0.57-1.85), or 3-month mortality (15.84% vs. 13.37%; OR 1.22; 0.62-2.37).
Conclusion: In this large multicenter propensity score-matched analysis, bridging intravenous thrombolysis before endovascular thrombectomy in the 6- to 24-hour time window was not significantly associated with improved efficacy or increased safety risks compared with direct endovascular therapy alone.
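The 1:2 propensity score matching step can be illustrated with a greedy nearest-neighbour sketch. The study states only the 1:2 ratio, so the caliper, processing order, and matching-without-replacement choices below are assumptions for illustration:

```python
def match_one_to_k(treated_ps, control_ps, k=2, caliper=0.05):
    """Greedy nearest-neighbour matching without replacement.

    treated_ps / control_ps: dicts mapping patient id -> propensity score.
    Returns {treated_id: [up to k matched control ids]}. The caliper
    (maximum allowed score distance) is an illustrative choice.
    """
    available = dict(control_ps)
    matches = {}
    for tid, ps in sorted(treated_ps.items(), key=lambda item: item[1]):
        chosen = []
        for _ in range(k):
            if not available:
                break
            # nearest remaining control by absolute score distance
            cid = min(available, key=lambda c: abs(available[c] - ps))
            if abs(available[cid] - ps) > caliper:
                break
            chosen.append(cid)
            del available[cid]
        matches[tid] = chosen
    return matches
```

After matching, covariate balance would typically be checked with standardized mean differences, as the study reports (absolute SMD <0.10).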
Kancheva, I. K.; Voigt, S.; Munting, L.; van Dis, V.; Koemans, E.; van Osch, M. J. P.; Wermer, M. J. H.; Hirschler, L.; van Walderveen, M.; Weerd, L. v. d.
A prominent radiological manifestation of cerebral amyloid angiopathy (CAA) is enlargement of perivascular spaces (EPVS), which is suggested to result from fluid stagnation due to impaired perivascular clearance. Here, we report a novel observation of hypointense rims in cerebral white matter surrounding EPVS near haemorrhages on in vivo 7T Gradient Echo MRI. We hypothesised that the observed black rim pattern denotes iron accumulation that may be caused by incomplete clearance following bleeding. We investigated the occurrence and localisation of this marker on in vivo and ex vivo MRI and examined its histopathological correlates. From MRI data of the prospective longitudinal natural history study of hereditary Dutch-type CAA (D-CAA) at Leiden University Medical Centre, we selected the first 20 consecutive patients who underwent 7T imaging and assessed the presence of black rims on MRI. Post-mortem material was available from one donor with black rims on in vivo scans. Formalin-fixed coronal brain slabs were scanned at 7T MRI, including a high-resolution T2*-weighted sequence. Guided by ex vivo MRI, tissue blocks from representative areas with black rims were sampled for histopathological analysis. Serial sections were stained for iron, calcium, myelin, and general tissue morphology. On in vivo 7T MRI, 9 out of 20 participants exhibited one or several black rims, all located close to a haemorrhage. In the D-CAA donor, ex vivo MRI signal loss matched the in vivo contrast changes. Thirty-six vessels with ex vivo-observed black rims were retrieved and histopathologically examined, showing iron accumulation surrounding perivascular spaces, but the pattern and severity of iron deposition varied. Across groups, vessels displayed microvascular degeneration, including hyaline vessel wall thickening, adventitial fibrosis, and perivascular inflammation. We identified black rims on in vivo 7T MRI and confirmed their correspondence on ex vivo imaging. 
Iron deposition was identified as the underlying correlate of black rims, but the histopathology appears heterogeneous. The preferential deposition of iron around EPVS may indicate incomplete clearance of iron-positive blood-breakdown products after bleeding. The varied pattern of iron accumulation and microvascular alterations may reflect different pathophysiological mechanisms related to the formation and maintenance of black rims in D-CAA.
Malara, P.; Tosin, A. G.; Castellucci, A.; Martellucci, S.; Musumano, L. B.; Mandala, M.
An increasing number of studies highlight the role of saccadic remodulation in compensatory mechanisms following vestibular injury, and the reappearance of suppression head impulse paradigm (SHIMP) saccades correlates with symptom improvement measured by the Dizziness Handicap Inventory (DHI). To investigate the influence of attentional processes and working memory on visuo-vestibular interaction, three independent but interrelated experiments were conducted. In the first two experiments, healthy subjects and patients with unilateral or bilateral vestibular deficits underwent the video head impulse test (vHIT) in SHIMP mode and the Functional Head Impulse Test (fHIT), performed first separately and subsequently simultaneously. Mean latency and clustering of SHIMP saccades, together with Landolt C recognition rates, were analyzed. Differences between the separate and combined protocols were assessed and, in patients, correlated with symptom severity measured by the DHI, to determine whether the near-simultaneous execution of tasks mediated by shared parietal cortical substrates influenced performance. In the third experiment, vHIT in head impulse paradigm (HIMP) mode and fHIT were performed using separate and combined protocols to evaluate whether recognition-related cognitive load affected recovery saccade latency and clustering. Results suggest that visual recognition modulates visuo-vestibular interaction, supporting integrated dual-task protocols for ecological balance assessment and helping explain clinical discrepancies.
Hosking, A.; Iveson, M. H.; Sherlock, L.; Mukherjee, M.; Grover, C.; Alex, B.; Parepalli, S.; Mair, G.; Doubal, F.; Whalley, H. C.; Tobin, R.; Wardlaw, J. M.; Al-Shahi Salman, R.; Whiteley, W. N.
Background: Outcome after stroke varies according to stroke subtype by location, but studies of healthcare systems data do not include subtyping information. We linked natural language processing (NLP) of brain imaging reports to routinely collected data to estimate the risk of death and other outcomes after stroke subtypes in a nationwide dataset. Methods: We applied a previously validated NLP algorithm to all CT and MRI head scan reports in Scotland between 2010 and 2018. We linked the reports to hospital readmissions, prescriptions, and death data to identify and characterize people with stroke, and to categorize them into deep and cortical ischemic stroke, deep and lobar intracerebral hemorrhage (ICH), subarachnoid hemorrhage, and subdural hemorrhage. We used a matched cohort design with four age- and sex-matched stroke-free controls per case. By subtype, we estimated rehospitalization with stroke, myocardial infarction (MI), cancer, dementia, epilepsy, and death, accounting for confounders and the competing risk of death. Results: From 785,331 people with a head scan, we identified 64,219 with clinical stroke phenotypes (mean age 73.4 years, 49.5% male), and subtyped 12,616 with deep ischemic stroke; 14,103 with cortical ischemic stroke; 1,814 with deep ICH; and 1,456 with lobar ICH. There was a higher absolute rate of 1-year hospital readmission for lobar compared with deep ICH (4.9% [95% CI 3.9%-6.1%] vs 3.4% [2.6%-4.3%]), a higher risk of dementia beyond 6 months after lobar ICH compared to controls than for other stroke subtypes (aHR 3.5 [2.3-5.3]), and a higher risk of MI within 6 months of cortical ischemic stroke than for other stroke subtypes (aHR 4.6 [3.4-6.3]). Conclusions: NLP of free-text reports linked to coded data successfully subtyped stroke at scale, and we estimated the risk of clinically relevant outcomes. Future work should use free text to enable large-scale audit and epidemiology of people with stroke.
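As a toy illustration of mapping report text to the six subtype categories (the study used a previously validated NLP algorithm, not simple keyword rules; the patterns below are invented for the sketch), a first-match rule lookup might look like:

```python
import re

# Toy keyword rules for illustration only -- not the validated algorithm.
SUBTYPE_PATTERNS = [
    ("subarachnoid hemorrhage", r"subarachnoid"),
    ("subdural hemorrhage", r"subdural"),
    ("deep ICH", r"(basal ganglia|thalam\w*).*(haemorrhage|hemorrhage)"),
    ("lobar ICH", r"lobar.*(haemorrhage|hemorrhage)"),
    ("cortical ischemic stroke", r"cortical.*(infarct|ischaemi|ischemi)"),
    ("deep ischemic stroke", r"(lacunar|deep).*(infarct|ischaemi|ischemi)"),
]

def subtype(report):
    """Return the first subtype whose pattern matches the report text."""
    text = report.lower()
    for label, pattern in SUBTYPE_PATTERNS:
        if re.search(pattern, text):
            return label
    return "unclassified"
```

A production pipeline would additionally handle negation ("no evidence of haemorrhage"), hedging, and report structure, which is why a validated NLP algorithm is needed at this scale.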
Sankaranarayanan, M.; Donahue, M. A.; Brooks, J. D.; Sun, S.; Newhouse, J. P.; Blacker, D.; Haneuse, S.; Hernandez-Diaz, S.; Moura, L. M. V. R.
Objective: Levetiracetam is commonly prescribed for seizure prophylaxis after acute ischemic stroke (AIS) and is often continued beyond discharge. While its short-term effectiveness for preventing post-stroke seizures is established, it is unclear whether prolonged use improves survival, particularly in older adults. We estimated the effect of continued levetiracetam use on 90-day mortality among Medicare beneficiaries after AIS. Methods: Using Traditional Medicare claims data (2008-2021), we identified beneficiaries aged ≥66 years hospitalized for AIS who initiated outpatient levetiracetam within 90 days of discharge. After one month of continued post-stroke levetiracetam use (start of follow-up), we compared 90-day mortality between patients with a new levetiracetam dispensation within a 14-day grace period after the start of follow-up and those without one. We applied cloning, censoring, and weighting to address immortal time bias, and estimated standardized mortality risks, risk differences, and 95% confidence intervals (CI). Results: Among 3,212 eligible beneficiaries, 1,779 (55.4%) received a new levetiracetam dispensation within the 14-day grace period. Median age was 76 years (IQR 70-83); 57.8% were female. After adjustment for demographics, hospitalization characteristics, timing of initiation, and comorbidities, continued use was associated with lower 90-day mortality than discontinuation (53 vs 62 deaths per 1,000; risk difference -9 per 1,000; 95% CI -12 to -5). The reduction was observed primarily among patients aged ≥75 years. Significance: Among older Medicare beneficiaries who initiated levetiracetam after AIS, continued outpatient use was associated with modestly lower 90-day mortality, particularly in those aged ≥75 years. These findings suggest potential benefits of levetiracetam continuation beyond the immediate post-stroke period.
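The grace-period exposure definition can be sketched as follows. The data are invented and the sketch is a crude unadjusted contrast: the actual analysis used cloning, censoring, and inverse-probability weighting, none of which appear here.

```python
# Sketch of the 14-day grace-period exposure classification and a crude
# (unweighted, unadjusted) risk difference per 1,000. Hypothetical data;
# the study's clone-censor-weight analysis is far more involved.
def classify(days_to_refill, grace=14):
    """Continued use = a new dispensation within the grace period."""
    return days_to_refill is not None and days_to_refill <= grace

cohort = [  # (days to next refill or None, died within 90 days)
    (5, False), (20, True), (None, False),
    (10, False), (None, True), (3, False),
]
cont = [died for d, died in cohort if classify(d)]
disc = [died for d, died in cohort if not classify(d)]
rd_per_1000 = 1000 * (sum(cont) / len(cont) - sum(disc) / len(disc))
print(round(rd_per_1000))
```

The clone-censor-weight step exists precisely because this naive comparison misclassifies person-time before the refill decision, producing immortal time bias.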
Khan, M.; Islam, A. M.; Abdel-Aty, Y.; Rosow, D.; Mallur, P.; Johns, M.; Rosen, C. A.; Bensoussan, Y. E.
Objective: Only preliminary investigations of the 445-nanometer-wavelength blue light laser (BLL) for various laryngeal pathologies have been described, and no standard currently exists for reporting treatment technique and tissue effect with this modality. Here, we aim to establish and validate a classification system to describe laser-induced tissue effects. Study Design: Retrospective video-based study for classification development and reliability validation. Methods: Video recordings from procedures performed with the BLL by multiple academic laryngologists were retrospectively reviewed. A preliminary 6-point classification (BLL 1-6) was developed based on expert consensus. Thirteen additional procedural clips were independently rated using the classification schema to assess perceived tissue effect and to measure inter- and intra-rater reliability. Results: The final 5-point classification system (BLL 1-5) comprised angiolysis, blanching, tissue vaporization, ablation with mechanical tissue removal, and cutting. Reviewers reached complete consensus in 89% (58 of 65) of ratings; of the 11% (7/65) of discordant ratings, 57% (4/7) were clips illustrating the BLL-2 classification. Intra-rater reliability among the reviewers was 100%. Conclusion: The tissue effect of the 445 nm blue light laser can be reliably standardized with this proposed classification system, which can facilitate future systematic study of outcomes and effective communication between laryngologists and trainees.
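The final scale can be represented as a simple lookup. The labels come from the abstract; the mapping of each label to BLL-1 through BLL-5 assumes the abstract lists them in rank order, which is an assumption.

```python
# Proposed 5-point BLL tissue-effect scale as a lookup table.
# Labels from the abstract; the 1..5 ordering is assumed from list order.
BLL_SCALE = {
    1: "angiolysis",
    2: "blanching",
    3: "tissue vaporization",
    4: "ablation with mechanical tissue removal",
    5: "cutting",
}
print(BLL_SCALE[2])
```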
Houle, T. T.; Lebowitz, A.; Chtay, I.; Patel, T.; McGeary, D. D.; Turner, D. P.
Importance: Migraine attacks often occur unpredictably, limiting individuals' ability to initiate timely preventive or preemptive treatment. Short-term probabilistic forecasting of migraine risk could enable more targeted management strategies. Objective: To externally validate the previously developed Headache Prediction Model (HAPRED-I), evaluate an updated continuously learning model (HAPRED-II), and assess the feasibility and short-term safety of delivering individualized probabilistic migraine forecasts directly to patients. Design, Setting, and Participants: Prospective 8-week cohort study conducted remotely at two academic medical centers in the United States (Massachusetts General Hospital and Wake Forest Health Sciences) between 2015 and 2019. Adults with recurrent migraine or tension-type headache completed twice-daily electronic diaries. A total of 230 participants contributed 23,335 diary entries across 11,862 participant-days of observation. Main Outcomes and Measures: Occurrence of a headache attack within 24 hours following each evening diary entry. Model performance was evaluated using discrimination (area under the receiver operating characteristic curve [AUC]) and calibration. Results: External validation of HAPRED-I demonstrated modest discrimination (AUC, 0.59; 95% CI, 0.57-0.61) and poor calibration, with predicted probabilities consistently exceeding observed headache risk. In contrast, the continuously updating HAPRED-II model demonstrated progressive improvement in predictive performance as participant-specific data accumulated. Discrimination increased from an AUC of 0.59 (95% CI, 0.57-0.61) during the first 14 days to 0.66 (95% CI, 0.63-0.70) after the first month, accompanied by improved calibration across predicted risk levels. Over the study period, 6,999 individualized forecasts were delivered directly to participants. No evidence suggested that receipt of forecasts was associated with increased headache frequency or worsening predicted headache risk trajectories. Conclusions and Relevance: A static migraine forecasting model demonstrated limited transportability to new individuals. In contrast, models that continuously update within individuals may improve predictive accuracy over time and enable real-time delivery of personalized migraine risk forecasts. Further work incorporating richer physiologic and contextual predictors will likely be necessary before such systems can reliably guide clinical treatment decisions.
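The discrimination metric reported above (AUC) is equivalent to the probability that a randomly chosen attack night receives a higher predicted risk than a randomly chosen attack-free night. A minimal pairwise (Mann-Whitney) computation, on invented scores rather than the HAPRED models' outputs:

```python
# Minimal AUC as the pairwise concordance between positive and negative
# predicted risks (ties count half). Illustrative scores, not study data.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.7, 0.6, 0.4, 0.3]  # predicted next-24h headache risk
labels = [1, 1, 0, 1, 0]            # 1 = attack occurred
print(auc(scores, labels))
```

Calibration, the study's second criterion, would additionally compare predicted probabilities against observed event rates within risk strata, which this sketch does not cover.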
Candia-Rivera, D.; Carrion-Falgarona, S.; Chavez, M.; de Vico Fallani, F.; Charpier, S.; Mahon, S.
Background: Global cerebral anoxia is a leading cause of death, and resuscitated patients often remain persistently affected by neurological deficits. While previous studies suggest that brain-heart electrophysiological interactions may predict the severity and prognosis of coma after hypoxic brain injury, little is known about brain-heart dynamics near death. Gaining insight into these mechanisms is crucial for developing targeted interventions in critical conditions. Results: Using a rodent model of reversible systemic anoxia (n=29, male and female rats), we investigated whether brain-heart interactions at asphyxia onset could predict the return of brain electrical activity after resuscitation. Electrophysiological recordings confirmed that cerebral activity declines following asphyxia, coinciding with increased heart rate variability. Notably, strong coupling between cardiac parasympathetic activity and high-frequency brain activity in the somatosensory cortex and hippocampus served as a key predictor of a favorable outcome. Conclusion: Our study underscores the involvement of brain-heart axis mechanisms in the physiology of dying and their potential prognostic significance, paving the way for translational research into critical care based on new characterizations of cardiac reflexes and brain-heart interactions.
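The heart rate variability signal mentioned above is conventionally quantified from beat-to-beat (RR) intervals; RMSSD is one standard index that tracks the parasympathetic component. The sketch below uses invented RR intervals and is not the study's coupling analysis, which related cardiac indices to cortical and hippocampal activity.

```python
# RMSSD: root mean square of successive RR-interval differences, a
# standard parasympathetically driven HRV index. Invented data.
from math import sqrt

def rmssd(rr_ms):
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

print(round(rmssd([800, 810, 790, 820, 805]), 2))  # RR intervals in ms
```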
Ekenze, O.; Scott, M. R.; Himali, D.; Lioutas, V.-A.; Seshadri, S.; Howard, V. J.; Fornage, M.; Aparicio, H. J.; Beiser, A. S.; Romero, J. R.
Sex-specific differences in stroke are recognized, but whether differences in incident stroke risk persist in recent periods needs further elucidation to aid public health prevention efforts. Aim: To determine long-term sex-specific trends in stroke and stroke risk factors across epochs among Framingham Heart Study participants. Methods: We examined age-adjusted 10-year stroke incidence using Cox regression in women and men in five epochs: 1962-1969 (epoch 1, reference), 1971-1976 (epoch 2), 1987-1991 (epoch 3), 1998-2005 (epoch 4), and 2015-2021 (epoch 5). We compared stroke incidence by sex across epochs and estimated decade-wise linear trends overall and by sex. We compared risk factors in successive epochs to the first and estimated sex-specific trends in risk factors. Interactions between baseline risk factors and epoch, and trends, were assessed by sex. Secondary analyses were repeated in participants <60 years old. Results: Incident stroke occurred in 4.5% (178/3,996) in epoch 1, 3.9% (227/5,786) in epoch 2, 3.9% (199/5,137) in epoch 3, 2.7% (207/7,642) in epoch 4, and 2.2% (119/5,534) in epoch 5. Men had a higher risk of incident stroke in each epoch, with significant differences in epochs 2 (HR 1.41, 95% CI [1.08, 1.84]) and 4 (HR 1.46, 95% CI [1.11, 1.91]) overall, and in epoch 4 (HR 2.13, 95% CI [1.17, 3.87]) among those <60 years. Stroke incidence declined by 16% per decade in men (HR 0.84, 95% CI [0.79, 0.89]) and 19% per decade in women (HR 0.81, 95% CI [0.76, 0.86]). Among those <60 years, stroke incidence declined by 22% per decade in women (HR 0.78, 95% CI [0.67, 0.95]). Hypertension declined by 8% per decade in women only (OR 0.92, 95% CI [0.90, 0.94]), while atrial fibrillation and diabetes increased in both sexes. Conclusion: Stroke incidence continues to decline in recent periods for women and men. Among participants <60 years, the decline was observed only in women, possibly related to the decline in hypertension in women.
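A per-decade hazard ratio compounds multiplicatively across decades, so the reported "16% per decade" in men implies a larger cumulative decline over the study's roughly six-decade span. A small arithmetic sketch of that conversion (illustrative only, not the study's Cox model code):

```python
# Convert a per-decade hazard ratio into cumulative percent decline,
# assuming the per-decade HR is constant (as in a log-linear trend).
def cumulative_decline(hr_per_decade: float, decades: float) -> float:
    return 1.0 - hr_per_decade ** decades

# Men: HR 0.84 per decade (from the abstract)
print(round(100 * cumulative_decline(0.84, 1)))  # one decade: ~16%
print(round(100 * cumulative_decline(0.84, 5)))  # five decades
```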